
    The troubled journey of QoS: From ATM to content networking, edge-computing and distributed internet governance

    Network Quality of Service (QoS) and the associated user Quality of Experience (QoE) have always been the networking “holy grail” and have been sought after through various approaches and networking technologies over the last decades. Despite the substantial effort invested in the area, there has been very little actual deployment of mechanisms to guarantee QoS in the Internet. As a result, the Internet largely operates on a “best effort” basis in terms of QoS. Here, we attempt a historical overview in order to better understand how we got to where we are today and to consider the evolution of QoS/QoE in the future. As we move towards more demanding networking environments, where enormous amounts of data are produced at the edge of the network (e.g., from IoT devices), computation will also need to migrate to the edge in order to guarantee QoS. In turn, we argue that distributed computing at the edge of the network will inevitably require infrastructure decentralisation. That said, trust in the infrastructure provider is more difficult to guarantee, and new components need to be incorporated into the Internet landscape in order to support emerging applications and achieve acceptable service quality. We start from the first steps of ATM and related IP-based technologies, consider recent proposals for content-oriented and Information-Centric Networking as well as mobile edge and fog computing, and finally discuss how distributed Internet governance through Distributed Ledger Technology and blockchains can influence QoS in future networks.

    NFaaS: Named Function as a Service

    In the past, the Information-Centric Networking (ICN) community has focused mainly on issues pertaining to traditional content delivery (e.g., routing and forwarding scalability, congestion control and in-network caching). However, to keep up with future Internet architectural trends, there is a pressing need to support edge/fog computing environments, where cloud functionality is available closer to where data is generated and needs processing. With this goal in mind, we propose Named Function as a Service (NFaaS), a framework that extends the Named Data Networking architecture to support in-network function execution. In contrast to existing works, NFaaS builds on very lightweight VMs and allows for dynamic execution of custom code. Functions can be downloaded and run by any node in the network, and can move between nodes according to user demand, making resolution of moving functions a first-class challenge. NFaaS includes a Kernel Store component, which is responsible not only for storing functions, but also for making decisions on which functions to run locally. NFaaS also includes a routing protocol and a number of forwarding strategies to deploy and dynamically migrate functions within the network. We validate our design through extensive simulations, which show that delay-sensitive functions are deployed closer to the edge, while less delay-sensitive ones are deployed closer to the core.
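
    To make the Kernel Store idea above concrete, the following minimal Python sketch (our own illustration, not the NFaaS implementation; all class, function and name-prefix choices are hypothetical) shows one plausible way a node could decide which named functions to keep runnable locally: score locally observed demand and weight delay-sensitive functions more heavily, so that they settle near the edge.

        # Hypothetical sketch of a Kernel Store-style decision policy, not the
        # authors' implementation: each node scores locally observed demand for
        # named functions and keeps the highest-scoring ones resident, weighting
        # delay-sensitive functions more heavily so they gravitate to the edge.

        from collections import defaultdict

        class KernelStoreSketch:
            def __init__(self, capacity, delay_weight=2.0):
                self.capacity = capacity          # max functions kept runnable locally
                self.delay_weight = delay_weight  # extra weight for delay-sensitive functions
                self.demand = defaultdict(int)    # requests seen per function name
                self.delay_sensitive = set()      # names flagged as delay-sensitive

            def observe_request(self, name, delay_sensitive=False):
                """Record one request for a named function passing through this node."""
                self.demand[name] += 1
                if delay_sensitive:
                    self.delay_sensitive.add(name)

            def functions_to_host(self):
                """Return the set of function names this node should keep locally."""
                def score(name):
                    w = self.delay_weight if name in self.delay_sensitive else 1.0
                    return w * self.demand[name]
                ranked = sorted(self.demand, key=score, reverse=True)
                return set(ranked[: self.capacity])

        # Example: an edge node with room for two functions.
        ks = KernelStoreSketch(capacity=2)
        for _ in range(5):
            ks.observe_request("/exec/transcode", delay_sensitive=True)
        for _ in range(8):
            ks.observe_request("/exec/batch-analytics")
        ks.observe_request("/exec/alarm", delay_sensitive=True)
        print(ks.functions_to_host())  # delay-sensitive transcode ranks first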

    Revisiting Resource Pooling: The Case for In-Network Resource Sharing

    We question the widely adopted view of in-network caches acting as temporary storage for the most popular content in Information-Centric Networks (ICN). Instead, we propose that in-network storage is used as a place of temporary custody for incoming content, in a store-and-forward manner. Given this functionality of in-network storage, senders push content into the network in an open-loop manner to take advantage of underutilised links. When content hits a bottleneck link, it is re-routed through alternative uncongested paths. If no alternative paths exist, incoming content is temporarily stored in in-network caches, while the system enters a closed-loop, back-pressure mode of operation to avoid congestive collapse. Our proposal follows in spirit the resource pooling principle, which, however, has so far been restricted to end-to-end resources and paths. We extend this principle to also take advantage of in-network resources, both in terms of the multiplicity of available sub-paths (as compared to multihomed users only) and in terms of in-network cache space. We call the proposed principle the In-Network Resource Pooling Principle (INRPP). Under INRPP, congestion, or increased contention over a link, is dealt with locally in a hop-by-hop manner, instead of end-to-end. INRPP utilises resources throughout the network more efficiently and opens up new directions for research in the areas of multipath routing and congestion control.
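
    The hop-by-hop behaviour described above can be pictured with a small sketch. The Python below is purely illustrative (our assumptions, not the paper's algorithm; the Link class, threshold value and function names are hypothetical): a node forwards a chunk on its primary path while that path has headroom, detours onto an uncongested alternative sub-path otherwise, and as a last resort takes temporary custody of the chunk and signals back-pressure upstream.

        # Illustrative sketch (not the paper's algorithm) of the hop-by-hop decision
        # that the INRPP description implies: forward on the primary path while it
        # has headroom, detour onto an uncongested alternative sub-path otherwise,
        # and as a last resort hold the chunk locally and push back on the sender.

        class Link:
            def __init__(self, utilisation):
                self.utilisation = utilisation  # fraction of capacity in use, 0..1
            def send(self, chunk):
                pass  # placeholder for actual transmission

        def handle_chunk(chunk, primary, alternatives, local_store):
            """Decide what a node does with an incoming content chunk."""
            CONGESTION_THRESHOLD = 0.9  # illustrative value, not from the paper

            if primary.utilisation < CONGESTION_THRESHOLD:
                primary.send(chunk)
                return "forwarded on primary path"

            for path in alternatives:                      # open-loop detour phase
                if path.utilisation < CONGESTION_THRESHOLD:
                    path.send(chunk)
                    return "re-routed on alternative sub-path"

            local_store.append(chunk)                      # temporary custody
            return "stored locally, back-pressure signalled upstream"

        store = []
        print(handle_chunk(b"chunk-1", Link(0.95), [Link(0.97), Link(0.4)], store))
        # -> "re-routed on alternative sub-path"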

    Hash-routing schemes for information centric networking

    It is our great pleasure to welcome you to the 3rd ACM SIGCOMM Workshop on Information-Centric Networking (ICN 2013). The fundamental concept in Information-Centric Networking (ICN) is to evolve the Internet from today's host-based packet delivery towards directly retrieving information objects by name in a secure, reliable, scalable, and efficient way. These architectural design efforts aim to directly address the challenges that arise from the increasing demand for highly scalable content distribution, from the accelerated growth of mobile devices, from the wide deployment of the Internet of Things (IoT), and from the need to secure the global Internet. Rapid progress has been made over the last few years: initial designs have been sketched, new research challenges exposed, and prototype implementations deployed on testbeds of various scales. The research effort has reached a new stage that allows one to experiment with proposed architectures and to apply a proposed architectural design to address real-world problems. It has also become important to compare different design approaches and to develop methodologies for architecture evaluation. Some research areas, such as routing and caching, have drawn considerable attention; others, such as trust management, effective and efficient application of cryptography, experience from prototyping, and lessons from experimentation, have yet to be fully explored. This workshop presents original contributions on Information-Centric Networking architecture topics, specific algorithms and protocols, as well as results from implementation and experimentation, with an emphasis on applying the new approach to real-world problems and on experimental investigations. New this year is a poster/demo session. We received a large number of submissions and, as the workshop is limited in time, we were only able to accept 20% of them as full papers. To promote sharing of the latest results among workshop attendees, we also accepted 17% of the submissions as posters or demos.

    Understanding Sharded Caching Systems

    Sharding is a method for allocating data items to the nodes of a distributed caching or storage system based on the result of a hash function computed on the item identifier. It is ubiquitously used in key-value stores, CDNs and many other applications. Although considerable work has focused on the design and implementation of such systems, there is limited understanding of their performance in realistic operational conditions from a theoretical standpoint. In this paper we fill this gap by providing a thorough modeling of sharded caching systems, focusing particularly on load balancing and caching performance. Our analysis provides important insights that can be applied to optimize the design and configuration of sharded caching systems.
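
    As a minimal illustration of the sharding mechanism described above (not code from the paper; the function and variable names are ours), the sketch below hashes each item identifier and maps it to one of the cluster's nodes, so that every node agrees on the assignment without any coordination.

        # Minimal sketch of hash-based sharding: each item identifier is hashed and
        # the result determines which cache node is responsible for it.

        import hashlib

        def shard_for(item_id: str, num_nodes: int) -> int:
            """Map an item identifier to a node index using a stable hash."""
            digest = hashlib.sha256(item_id.encode()).hexdigest()
            return int(digest, 16) % num_nodes

        # With uniform hashing, items spread roughly evenly across nodes, which is
        # the load-balancing behaviour the paper models; skewed item popularity is
        # what makes the realistic analysis non-trivial.
        counts = [0, 0, 0, 0]
        for i in range(10000):
            counts[shard_for(f"object-{i}", 4)] += 1
        print(counts)  # roughly 2500 items per node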

    Edge Data Repositories - The design of a store-process-send system at the Edge

    The Edge of the Internet currently accommodates large numbers of devices, and these numbers will increase dramatically as technology advances. Edge devices and their associated service bandwidth requirements are predicted to become a major problem in the near future. As a result, the popularity of data management, analysis and processing at the edge is also increasing. This paper proposes Edge Data Repositories and analyses their performance. In this context, we provide a service quality and resource allocation feedback algorithm for the processing and storage capabilities of Edge Data Repositories. A suitable simulation environment was created for this system with the help of the ONE Simulator, and the simulations were used to evaluate an Edge Data Repository cluster under different scenarios, providing a range of service models. From there, with the help and adaptation of a few basic network management concepts, the feedback algorithm was developed. As an initial step, we assess and provide measurable performance feedback, through this algorithm, for the most essential parts of our envisioned system: network metrics and service and resource status.

    Mind the gap: modelling video delivery under expected periods of disconnection

    In this work we model video delivery under expected periods of disconnection, such as those experienced in public transportation systems. Our main goal is to quantify the gains of user collaboration in terms of Quality of Experience (QoE) in the context of intermittently available and bandwidth-limited Wi-Fi connectivity. Under the assumption that Wi-Fi connectivity is available within underground stations but absent between them, we first define a mathematical model that describes content distribution under these conditions and present the users' QoE function in terms of undisrupted video playback. Next, we extend this model to include collaboration between users, who share content in a peer-to-peer (P2P) manner. Lastly, we evaluate our model using real data from the London Underground network, where we investigate the feasibility of content distribution, only to find that collaboration between users significantly increases their QoE.
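
    A back-of-the-envelope sketch of the undisrupted-playback question follows; it rests on our own simplified assumptions (alternating connected/disconnected intervals, constant download and playback rates) rather than the paper's exact model. The idea is that playback survives a journey if the buffer built up while connected never drains to zero during the gaps.

        # Simplified illustration, not the paper's model: while connected, a user
        # downloads video faster than real time and builds up a buffer; while
        # disconnected, the buffer drains at playback rate. Playback is
        # undisrupted iff the buffer never empties.

        def playback_undisrupted(intervals, download_rate, playback_rate):
            """intervals: ordered list of (connected: bool, duration_s) segments.

            download_rate and playback_rate are in seconds of video per second of
            wall-clock time (e.g. download_rate=4.0 fetches 4 s of video per s).
            """
            buffered = 0.0  # seconds of video ahead of the playhead
            for connected, duration in intervals:
                if connected:
                    buffered += (download_rate - playback_rate) * duration
                else:
                    buffered -= playback_rate * duration
                if buffered < 0:
                    return False  # playback stalls during this segment
            return True

        # Example trip: 60 s in a station with fast Wi-Fi, 120 s in a tunnel, etc.
        trip = [(True, 60), (False, 120), (True, 45), (False, 150)]
        print(playback_undisrupted(trip, download_rate=4.0, playback_rate=1.0))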

    Framework and Algorithms for Operator-Managed Content Caching

    We propose a complete framework for operator-driven content caching that can be applied equally to ISP-operated Content Delivery Networks (CDNs) and to future Information-Centric Networks (ICNs). In contrast to previous proposals in this area, our solution leverages operators’ control over cache placement and content routing, managing to considerably reduce network operating costs by minimizing the amount of transit traffic and balancing load among available network resources. In addition, our solution provides two key advantages over previous proposals. First, it allows for a simple computation of the optimal cache placement. Second, it provides knobs for operators to fine-tune performance. We validate our design through both analytical modeling and trace-driven simulations and show that our proposed solution achieves on average twice as many cache hits as previously proposed techniques, without increasing delivery latency. In addition, we show that the proposed framework achieves 19-33% better load balancing across links and caching nodes, while also being robust to traffic spikes.

    In-network cache management and resource allocation for information-centric networks

    We introduce the concept of resource management for in-network caching environments. We argue that in Information-Centric Networking environments, deterministically caching content messages at predefined places along the content delivery path results in unfair and inefficient content multiplexing between different content flows, as well as in significant caching redundancy. Instead, allocating resources along the path according to content flow characteristics results in better use of network resources and, therefore, higher overall performance. The design principles of our proposed in-network caching scheme, which we call ProbCache, target these two outcomes, namely the reduction of caching redundancy and fair content flow multiplexing along the delivery path. In particular, ProbCache approximates the caching capability of a path and caches contents probabilistically in order to: 1) leave caching space for other flows sharing (part of) the same path, and 2) fairly multiplex contents in caches along the path from the server to the client. We elaborate on the content multiplexing fairness of ProbCache and find that it sometimes favors content flows connected far away from the source; that is, it gives higher priority to flows travelling longer paths, leaving little space to shorter-path flows. We therefore introduce an enhanced version of the main algorithm that guarantees fair behavior towards all participating content flows. We evaluate the proposed schemes in both homogeneous and heterogeneous cache size environments and formulate a framework for resource allocation in in-network caching environments. The proposed probabilistic approach to in-network caching exhibits ideal performance both in terms of network resource utilization and in terms of resource allocation fairness among competing content flows. Finally, and in contrast to the expected behavior, we find that the efficient design of ProbCache results in fast convergence to caching of popular content items.
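
    A simplified sketch of path-aware probabilistic caching in the spirit of the description above is given below; the exact weighting used by ProbCache differs, and the normalization constants and function names here are our own illustrative choices. The intent is that the caching probability grows with the remaining cache capacity towards the client and with how far the item has already travelled along the path, so caching effort shifts to the edge while space is left for other flows.

        # Illustrative sketch only, not the ProbCache formula: the admission
        # probability scales with remaining cache capacity on the rest of the path
        # and with the item's position along the delivery path.

        import random

        def cache_probability(hops_travelled, path_length, remaining_cache_slots,
                              target_copies=1, avg_cache_slots=100):
            """Return a caching probability in [0, 1].

            hops_travelled / path_length approximates position along the delivery
            path (1.0 at the client-facing edge); remaining_cache_slots is the
            aggregate capacity from this node to the client. target_copies and
            avg_cache_slots are illustrative normalization constants.
            """
            capacity_factor = remaining_cache_slots / (target_copies * avg_cache_slots)
            position_factor = hops_travelled / path_length
            return min(1.0, capacity_factor * position_factor)

        def maybe_cache(item, cache, **kwargs):
            """Probabilistically admit an item into this node's cache."""
            if random.random() < cache_probability(**kwargs):
                cache.add(item)

        node_cache = set()
        maybe_cache("/videos/clip42", node_cache,
                    hops_travelled=4, path_length=5, remaining_cache_slots=150)
        print(node_cache)  # cached with probability 1.0 in this near-edge example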

    Named Functions at the Edge

    As end-user and edge-network devices become ever more powerful, they produce ever-increasing amounts of data. Pulling all this data into the cloud for processing is impossible, not only due to its enormous volume, but also due to the stringent latency requirements of many applications. Instead, we argue that end-user and edge-network devices should collectively form edge computing swarms and complement the cloud with their storage and processing resources. This shift from centralized to edge clouds has the potential to open new horizons for application development, supporting new low-latency services and, ultimately, creating new markets for storage and processing resources. To realize this vision, we propose Named Functions at the Edge (NFE), a platform where functions can i) be identified through a routable name, ii) be requested and moved (as data objects) to process data on demand at edge nodes, iii) pull raw or anonymized data from sensors and devices, iv) securely and privately return their results to the invoker, and v) compensate each party for the use of their data, storage, communication or computing resources via tracking and accountability mechanisms. We use an emergency evacuation application to motivate the need for NFE and demonstrate its potential.